
    Groupwise Multimodal Image Registration Using Joint Total Variation

    In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.) to highlight different structures or pathologies. As patient movement between scans or scanning sessions is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv
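    The cost above is easy to sketch: joint total variation couples the channels by taking, at each voxel, the Euclidean norm of all channels' spatial gradients together, so the total cost drops when edges coincide across modalities. A minimal NumPy sketch (the function name and `eps` smoothing term are our own, not the paper's API):

```python
import numpy as np

def joint_total_variation(channels, eps=1e-8):
    """Joint TV of a list of aligned images: at each voxel, the Euclidean
    norm of all channels' spatial gradients, summed over voxels.  The norm
    couples edges across modalities, so aligned edges cost less."""
    sq = np.zeros_like(channels[0], dtype=float)
    for img in channels:
        for axis in range(img.ndim):
            g = np.gradient(img.astype(float), axis=axis)
            sq += g ** 2
    return float(np.sqrt(sq + eps).sum())
```

    Because sqrt(a + b) <= sqrt(a) + sqrt(b), two images whose edges coincide score a lower joint cost than the same images misaligned, which is what drives the groupwise registration.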

    Machine phenotyping of cluster headache and its response to verapamil

    Cluster headache is characterized by recurrent, unilateral attacks of excruciating pain associated with ipsilateral cranial autonomic symptoms. Although a wide array of clinical, anatomical, physiological, and genetic data have informed multiple theories about the underlying pathophysiology, the lack of a comprehensive mechanistic understanding has inhibited, on the one hand, the development of new treatments and, on the other, the identification of features predictive of response to established ones. The first-line drug, verapamil, is effective in only half of all patients, and then often only after several weeks of dose escalation, rendering therapeutic selection both uncertain and slow. Here we use high-dimensional modelling of routinely acquired phenotypic and MRI data to quantify the predictability of verapamil responsiveness and to illuminate its neural dependants, across a cohort of 708 patients evaluated for cluster headache at the National Hospital for Neurology and Neurosurgery between 2007 and 2017. We derive a succinct latent representation of cluster headache from non-linear dimensionality reduction of structured clinical features, revealing novel phenotypic clusters. In a subset of patients, we show that models based on gradient boosting machines can predict verapamil responsiveness at the individual level from clinical (410 patients) and imaging (194 patients) features. Models combining clinical and imaging data establish the first benchmark for predicting verapamil responsiveness, with an area under the receiver operating characteristic curve of 0.689 on cross-validation (95% confidence interval: 0.651 to 0.710) and 0.621 on held-out data. In the imaged patients, voxel-based morphometry revealed a grey matter cluster in lobule VI of the cerebellum (–4, –66, –20) exhibiting enhanced grey matter concentrations in verapamil non-responders compared with responders (familywise error-corrected P = 0.008, 29 voxels). We propose a mechanism for the therapeutic effect of verapamil that draws on the neuroanatomy and neurochemistry of the identified region. Our results reveal previously unrecognized high-dimensional structure within the phenotypic landscape of cluster headache that enables prediction of treatment response with modest fidelity. An analogous approach applied to larger, globally representative datasets could facilitate data-driven redefinition of diagnostic criteria and stronger, more generalizable predictive models of treatment responsiveness.
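    The headline figure of merit here is the area under the ROC curve. As a reminder of what that number means, here is a minimal rank-based AUC computation (a generic sketch of the metric, not the study's pipeline): AUC is the probability that a randomly chosen responder is scored above a randomly chosen non-responder, with ties getting half credit.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs where the positive outranks
    the negative, counting ties as half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```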

    MRI super-resolution using multi-channel total variation

    This paper presents a generative model for super-resolution in routine clinical magnetic resonance imaging (MRI), of arbitrary orientation and contrast. The model recasts the recovery of high-resolution images as an inverse problem, in which a forward model simulates the slice-select profile of the MR scanner. The paper introduces a prior based on multi-channel total variation for MRI super-resolution. The bias-variance trade-off is handled by estimating hyper-parameters from the low-resolution input scans. The model was validated on a large database of brain images. The validation showed that the model can improve brain segmentation, that it can recover anatomical information between images of different MR contrasts, and that it generalises well to the large variability present in MR images of different subjects.
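    The forward model described above maps the unknown high-resolution image to the observed thick-slice acquisition. A toy 1D stand-in, with a simple boxcar average playing the role of the slice-select profile (the decimation factor, function names, and adjoint are illustrative assumptions, not the paper's operators):

```python
import numpy as np

def slice_profile_forward(x, factor=2):
    """Toy forward model: average groups of `factor` high-resolution
    samples, i.e. boxcar slice-profile blur followed by decimation."""
    n = (x.size // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def slice_profile_adjoint(y, factor=2):
    """Adjoint of the toy forward model: spread each low-resolution value
    back over its `factor` source samples (needed when optimising the
    inverse problem by gradient-based methods)."""
    return np.repeat(y / factor, factor)
```

    The adjoint pair is what makes the inverse-problem formulation workable: the data term's gradient applies the forward model and then its adjoint.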

    An MRF-UNet Product of Experts for Image Segmentation

    While convolutional neural networks (CNNs) trained by back-propagation have seen unprecedented success at semantic segmentation tasks, they are known to struggle on out-of-distribution data. Markov random fields (MRFs), on the other hand, encode simpler distributions over labels that, although less flexible than UNets, are less prone to over-fitting. In this paper, we propose to fuse both strategies by computing the product of the distributions from a UNet and an MRF. As this product is intractable, we solve for an approximate distribution using an iterative mean-field approach. The resulting MRF-UNet is trained jointly by back-propagation. Compared to other works using conditional random fields (CRFs), the MRF has no dependency on the imaging data, which should allow for less over-fitting. We show on 3D neuroimaging data that this novel network improves generalisation to out-of-distribution samples. Furthermore, it allows the overall number of parameters to be reduced while preserving high accuracy. These results suggest that a classic MRF smoothness prior can reduce over-fitting when integrated into a CNN model in a principled way. Our implementation is available at https://github.com/balbasty/nitorch
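    The product of the two distributions is intractable, but the mean-field idea is easy to sketch: repeatedly take a softmax over the network's logits plus a smoothness message formed from neighbouring sites' current beliefs. A 1D toy version (the Potts-style coupling, the `beta` weight, and the chain topology are our own simplifications of the 3D model):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def meanfield_mrf_unet(logits, beta=1.0, iters=10):
    """Mean-field approximation to the (intractable) UNet x MRF product on
    a 1D chain of sites: each site's posterior combines the network logits
    with a smoothness message from its neighbours' current beliefs."""
    q = softmax(logits)
    for _ in range(iters):
        msg = np.zeros_like(q)
        msg[1:] += q[:-1]    # belief of the left neighbour
        msg[:-1] += q[1:]    # belief of the right neighbour
        q = softmax(logits + beta * msg)
    return q
```

    A site whose logits weakly favour the "wrong" label gets pulled towards its neighbours' consensus, which is the smoothing behaviour the MRF prior contributes.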

    Empirical Bayesian Mixture Models for Medical Image Translation

    Automatically generating one medical imaging modality from another is known as medical image translation, and has numerous interesting applications. This paper presents an interpretable generative modelling approach to medical image translation. By allowing a common model for group-wise normalisation and segmentation of brain scans to handle missing data, the model allows for predicting entirely missing modalities from one, or a few, MR contrasts. Furthermore, the model can be trained on a fairly small number of subjects. The proposed model is validated on three clinically relevant scenarios. Results appear promising and show that a principled, probabilistic model of the relationship between multi-channel signal intensities can be used to infer missing modalities -- both MR contrasts and CT images.

    Comment: Accepted to the Simulation and Synthesis in Medical Imaging (SASHIMI) workshop at MICCAI 201
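    Predicting a missing modality from observed ones rests, per mixture component, on the conditional mean of a multivariate Gaussian over channel intensities. A generic sketch of that building block (the interface and names are ours, not the paper's):

```python
import numpy as np

def impute_missing_channel(mu, Sigma, x_obs, obs_idx, mis_idx):
    """Conditional mean of a multivariate Gaussian:
    E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o).
    A Gaussian mixture applies this per component (weighted by the
    component responsibilities) to predict missing channel intensities."""
    mu = np.asarray(mu, float)
    Sigma = np.asarray(Sigma, float)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    return mu[mis_idx] + S_mo @ np.linalg.solve(S_oo, x_obs - mu[obs_idx])
```

    With strongly correlated channels the prediction tracks the observed intensity closely; with uncorrelated channels it falls back to the component mean.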

    ABCD Neurocognitive Prediction Challenge 2019: Predicting Individual Residual Fluid Intelligence Scores from Cortical Grey Matter Morphology

    We predicted fluid intelligence from T1-weighted MRI data available as part of the ABCD NP Challenge 2019, using morphological similarity of grey-matter regions across the cortex. Individual structural covariance networks (SCN) were abstracted into graph-theory metrics averaged over nodes across the brain and in data-driven communities/modules. Metrics included degree, path length, clustering coefficient, centrality, rich club coefficient, and small-worldness. These features derived from the training set were used to build various regression models for predicting residual fluid intelligence scores, with performance evaluated both using cross-validation within the training set and using the held-out validation set. Our predictions on the test set were generated with a support vector regression model trained on the training set. We found minimal improvement over predicting a zero residual fluid intelligence score across the sample population, implying that structural covariance networks calculated from T1-weighted MR imaging data provide little information about residual fluid intelligence.
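    Two of the node-averaged metrics listed above, degree and characteristic path length, can be computed directly from an adjacency matrix. A small NumPy sketch for unweighted, undirected graphs (Floyd-Warshall for shortest paths; a toy illustration, not the challenge entry's full feature set):

```python
import numpy as np

def scn_summary(adj):
    """Mean degree and characteristic path length of an unweighted,
    undirected graph given as an adjacency matrix.  Shortest paths via
    a vectorised Floyd-Warshall; unreachable pairs are excluded."""
    n = adj.shape[0]
    mean_degree = adj.sum() / n
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                       # Floyd-Warshall relaxation
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    finite = dist[np.isfinite(dist) & (dist > 0)]
    char_path = finite.mean() if finite.size else np.inf
    return mean_degree, char_path
```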

    Flexible Bayesian Modelling for Nonlinear Image Registration

    We describe a diffeomorphic registration algorithm that allows groups of images to be accurately aligned to a common space, which we intend to incorporate into the SPM software. The idea is to perform inference in a probabilistic graphical model that accounts for variability in both shape and appearance. The resulting framework is general and entirely unsupervised. The model is evaluated on inter-subject registration of 3D human brain scans. Here, the main modelling assumption is that individual anatomies can be generated by deforming a latent 'average' brain. The method is agnostic to imaging modality and can be applied with no prior processing. We evaluate the algorithm using freely available, manually labelled datasets. In this validation we achieve state-of-the-art results, within reasonable runtimes, compared with widely used inter-subject registration algorithms. On the unprocessed dataset, the increase in overlap score is over 17%. These results demonstrate the benefits of using informative computational anatomy frameworks for nonlinear registration.

    Comment: Accepted for MICCAI 202
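    A standard ingredient of diffeomorphic registration of this kind is exponentiating a stationary velocity field by scaling and squaring: start from a small displacement v / 2^n, then compose the map with itself n times. A 1D sketch, with composition by linear interpolation (the step count and boundary handling are simplifications; the paper's actual parameterisation may differ):

```python
import numpy as np

def exp_velocity_1d(v, n_steps=6):
    """Scaling and squaring in 1D: exponentiate a stationary velocity
    field v into a deformation phi.  Begin with the scaled displacement
    id + v / 2**n_steps, then square (self-compose) n_steps times."""
    grid = np.arange(v.size, dtype=float)
    phi = grid + v / (2.0 ** n_steps)
    for _ in range(n_steps):
        phi = np.interp(phi, grid, phi)   # phi <- phi o phi
    return phi
```

    The small initial step keeps each composition close to the identity, which is what preserves invertibility (monotonicity in 1D) of the final deformation.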

    Outcome after acute ischemic stroke is linked to sex-specific lesion patterns

    Acute ischemic stroke affects men and women differently. In particular, women are often reported to experience higher acute stroke severity than men. We derived a low-dimensional representation of anatomical stroke lesions and designed a Bayesian hierarchical modeling framework tailored to estimate possible sex differences in lesion patterns linked to acute stroke severity (National Institutes of Health Stroke Scale). This framework was developed in 555 patients (38% female). Findings were validated in an independent cohort (n = 503, 41% female). Here, we show that brain lesions in regions subserving motor and language functions help explain stroke severity in both men and women; however, more widespread lesion patterns are relevant in female patients. Higher stroke severity in women, but not men, is associated with left-hemisphere lesions in the vicinity of the posterior circulation. Our results suggest there are sex-specific functional cerebral asymmetries that may be important for future investigations of sex-stratified approaches to the management of acute ischemic stroke.

    Fitting Segmentation Networks on Varying Image Resolutions using Splatting

    Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint of the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step.
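    The key identity here is that the splat ("push") operator is the adjoint of the resampling ("pull") operator, which is what lets a mean-space prediction be pulled back to native label space. A 1D sketch with linear interpolation weights (our own toy version assuming in-range coordinates, not the paper's layer):

```python
import numpy as np

def pull(values, coords):
    """Resampling (pull): linear interpolation of grid `values` at `coords`."""
    grid = np.arange(values.size, dtype=float)
    return np.interp(coords, grid, values)

def push(values, coords, out_size):
    """Splatting (push): the adjoint of linear-interpolation pull.  Each
    value is scattered onto its two neighbouring grid nodes with the same
    linear weights the pull would use (coords assumed within the grid)."""
    out = np.zeros(out_size)
    lo = np.floor(coords).astype(int)
    w = coords - lo
    lo = np.clip(lo, 0, out_size - 1)
    hi = np.clip(lo + 1, 0, out_size - 1)
    np.add.at(out, lo, values * (1 - w))  # unbuffered scatter-add
    np.add.at(out, hi, values * w)
    return out
```

    The adjoint relation <pull(x, c), y> = <x, push(y, c, n)> is exactly what allows the loss, computed in native space after the pull-back, to back-propagate correctly through the mean-space forward pass.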